Data Vault Modeling
Data Vault Modeling is a database modeling method that is designed to provide long-term historical storage of data coming in from multiple operational systems. It is also a method of looking at historical data that, apart from the modeling aspect, deals with issues such as auditing, tracing of data, loading speed and resilience to change.
Data Vault Modeling focuses on several things. First, it emphasizes the need to trace where all the data in the database came from. This means that every row in a Data Vault must be accompanied by record source and load date attributes, enabling an auditor to trace each value back to its source.
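As a minimal sketch of that requirement (the table and attribute names here are illustrative assumptions, not names prescribed by the method), a row carrying its audit attributes could be modeled like this:

```python
from dataclasses import dataclass
from datetime import datetime

# Illustrative sketch: every Data Vault row carries audit attributes
# recording where it came from and when it was loaded.
@dataclass(frozen=True)
class CustomerHubRow:
    customer_key: str    # business key, as delivered by the source system
    record_source: str   # operational system that supplied the row
    load_date: datetime  # moment the row entered the data warehouse

row = CustomerHubRow(
    customer_key="CUST-1042",
    record_source="CRM",
    load_date=datetime(2016, 3, 1, 4, 30),
)
# An auditor can now trace the value back to a specific source and load.
print(row.record_source, row.load_date)
```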
Second, it makes no distinction between good and bad data ("bad" meaning not conforming to business rules).〔Super Charge your data warehouse, page 74〕 This is summarized in the statement that a Data Vault stores "a single version of the facts" (also expressed by Dan Linstedt as "all the data, all of the time"), as opposed to the practice in other data warehouse methods of storing "a single version of the truth",〔The next generation EDW〕 where data that does not conform to the definitions is removed or "cleansed".
Third, the modeling method is designed to be resilient to change in the business environment from which the stored data comes, by explicitly separating structural information from descriptive attributes.〔Super Charge your data warehouse, page 21〕
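In Data Vault modeling this separation takes the form of distinct table types: hubs hold the business keys, links hold the associations between business keys, and satellites hold the descriptive attributes. A schematic sketch of the three structures (field names are illustrative only):

```python
from dataclasses import dataclass
from datetime import datetime

# Structural tables: business keys and the associations between them.
# These stay stable when the source systems or the business change.
@dataclass(frozen=True)
class Hub:
    business_key: str
    record_source: str
    load_date: datetime

@dataclass(frozen=True)
class Link:
    hub_keys: tuple       # business keys of the hubs this row relates
    record_source: str
    load_date: datetime

# Descriptive table: attributes hung off a hub or a link. A new or
# changed source attribute touches only a satellite, not the structure.
@dataclass(frozen=True)
class Satellite:
    parent_key: str       # the hub or link this row describes
    attributes: dict      # descriptive columns, versioned by load date
    record_source: str
    load_date: datetime
```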
Finally, Data Vault is designed to enable parallel loading as much as possible,〔Super Charge your data warehouse, page 76〕 so that very large implementations can scale out without the need for major redesign.
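Parallel loading follows from that structure: hubs do not depend on one another, and links and satellites reference only keys that the hubs provide, so each layer can be loaded concurrently. A sketch of the pattern (the loader function and table names are hypothetical placeholders):

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical placeholder; a real loader would bulk-insert one
# Data Vault table from a staging area.
def load_table(name: str) -> None:
    print(f"loading {name}")

hubs = ["hub_customer", "hub_product"]
links = ["link_customer_product"]
satellites = ["sat_customer_details", "sat_product_details"]

with ThreadPoolExecutor() as pool:
    # Hubs are independent of each other: load them all in parallel.
    list(pool.map(load_table, hubs))
    # Links and satellites reference only the hub keys loaded above,
    # so they too can run in parallel with one another.
    list(pool.map(load_table, links + satellites))
```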
== History and philosophy ==

In data warehouse modeling there are two well-known competing options for modeling the layer where the data is stored. Either you model according to Ralph Kimball, with conformed dimensions and an enterprise data bus, or you model according to Bill Inmon, with the database normalized. Both techniques have issues when dealing with changes in the systems feeding the data warehouse. Conformed dimensions also require the data to be cleansed (to conform it), and this is undesirable in a number of cases because cleansing inevitably loses information. Data Vault is designed to avoid or minimize the impact of those issues by moving them to areas of the data warehouse outside the historical storage area (cleansing is done in the data marts) and by separating the structural items (business keys and the associations between the business keys) from the descriptive attributes.
Dan Linstedt, the creator of the method, describes the resulting database as follows:

"A detail oriented, historical tracking and uniquely linked set of normalized tables that support one or more functional areas of business. It is a hybrid approach encompassing the best of breed between 3rd normal form (3NF) and star schema. The design is flexible, scalable, consistent and adaptable to the needs of the enterprise."

Data Vault's philosophy is that all data is relevant, even if it does not conform to established definitions and business rules. Data that fails to conform is a problem for the business, not for the data warehouse. The determination that data is "wrong" is an interpretation that stems from a particular point of view, one that may not be valid for everyone or at every point in time. The Data Vault must therefore capture all data, and the data is interpreted only when it is reported or extracted from the Data Vault.
Another issue to which Data Vault responds is the growing need for complete auditability and traceability of all the data in the data warehouse. Because of Sarbanes-Oxley in the United States and similar measures in Europe, this is a relevant topic for many business intelligence implementations; hence, the focus of any Data Vault implementation is complete traceability and auditability of all information.
Data Vault 2.0 is the newer specification, but it is not an open standard.〔A short intro to #datavault 2.0〕 The new specification contains components that define implementation best practices, the methodology (SEI/CMMI, Six Sigma, SDLC, etc.), the architecture, and the model. Data Vault 2.0 focuses on including new components such as Big Data and NoSQL, and also on the performance of the existing model. The old specification, documented here for the most part, is highly focused on Data Vault Modeling.
It is necessary to evolve the specification to include the new components, along with the best practices, in order to keep EDW and BI systems current with the needs and desires of today's businesses.
